Egocentric 3D human pose estimation with a single head-mounted fisheye camera has recently attracted attention due to its numerous applications in virtual and augmented reality. Existing methods still struggle with challenging poses, where the human body is highly occluded or closely interacting with the scene. To address this issue, we propose a scene-aware egocentric pose estimation method that guides the prediction of the egocentric pose with scene constraints. To this end, we propose an egocentric depth estimation network to predict the scene depth map from a wide-view egocentric fisheye camera while mitigating the occlusion of the human body with a depth-inpainting network. Next, we propose a scene-aware pose estimation network that projects the 2D image features and estimated depth map of the scene into a voxel space and regresses the 3D pose with a V2V network. The voxel-based feature representation provides the direct geometric connection between 2D image features and scene geometry, and further facilitates the V2V network to constrain the predicted pose based on the estimated scene geometry. To enable the training of the aforementioned networks, we also generated a synthetic dataset, called EgoGTA, and an in-the-wild dataset based on EgoPW, called EgoPW-Scene. The experimental results on our new evaluation sequences show that the predicted 3D egocentric poses are accurate and physically plausible in terms of human-scene interaction, demonstrating that our method outperforms the state-of-the-art methods both quantitatively and qualitatively.
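To make the voxel-based fusion concrete, below is a minimal PyTorch sketch of the two steps the abstract names: lifting 2D image features and the estimated depth map into a voxel grid, then regressing per-joint heatmaps with a small V2V-style 3D CNN. The pinhole projection (a stand-in for the fisheye camera model), all shapes, and the layer sizes are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def lift_to_voxels(feat2d, depth, K, grid, sigma=0.1):
    """feat2d: (C,H,W) image features; depth: (H,W) estimated scene depth;
    K: (3,3) intrinsics; grid: (D,D,D,3) voxel centers in camera space.
    Each voxel samples the feature at its image projection, weighted by how
    close the voxel sits to the observed depth along that ray."""
    D = grid.shape[0]
    pts = grid.reshape(-1, 3)                             # (N,3) voxel centers
    uvw = (K @ pts.T).T
    uv = uvw[:, :2] / uvw[:, 2:].clamp(min=1e-6)          # (N,2) pixel coords
    H, W = depth.shape
    uv_n = torch.stack([uv[:, 0] / (W - 1) * 2 - 1,       # normalize for grid_sample
                        uv[:, 1] / (H - 1) * 2 - 1], dim=-1)
    samp = F.grid_sample(feat2d[None], uv_n[None, :, None],
                         align_corners=True)[0, :, :, 0]          # (C,N)
    d_obs = F.grid_sample(depth[None, None], uv_n[None, :, None],
                          align_corners=True)[0, 0, :, 0]         # (N,)
    w = torch.exp(-((pts[:, 2] - d_obs) ** 2) / (2 * sigma ** 2))  # depth agreement
    return (samp * w).reshape(-1, D, D, D)                # (C,D,D,D) feature volume

class V2V(nn.Module):
    """Tiny stand-in for a V2V network: a 3D conv stack over the volume."""
    def __init__(self, c_in, n_joints):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(c_in, 32, 3, padding=1), nn.ReLU(),
            nn.Conv3d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv3d(32, n_joints, 1))                   # per-joint 3D heatmaps
    def forward(self, vol):
        return self.net(vol[None])                        # soft-argmax -> 3D pose
```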
We present EgoRenderer, a system for rendering full-body neural avatars of a person captured by a wearable, egocentric fisheye camera mounted on a cap or a VR headset. Our system renders photorealistic novel views of the actor and her motion from arbitrary virtual camera locations. Rendering full-body avatars from such egocentric images comes with unique challenges due to the top-down view and large distortions. We tackle these challenges by decomposing the rendering process into several steps, including texture synthesis, pose construction, and neural image translation. For texture synthesis, we propose Ego-DPNet, a neural network that infers dense correspondences between the input fisheye image and an underlying parametric body model, and extracts textures from the egocentric input. In addition, to encode dynamic appearance, our method also learns an implicit texture stack that captures detailed appearance variation across poses and viewpoints. For correct pose generation, we first estimate the body pose from the egocentric view using a parametric model. We then synthesize an external free-viewpoint pose image by projecting the parametric model to the user-specified target viewpoint. We next combine the target pose image and the texture into a combined feature image, which is transformed into the output color image using a neural image translation network. Experimental evaluations show that EgoRenderer is able to produce realistic free-viewpoint avatars of a person wearing an egocentric camera. Comparisons to several baselines demonstrate the advantages of our approach.
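The texture-synthesis step is the most self-contained piece of this pipeline. Below is a hedged sketch of how egocentric pixels can be scattered into a partial UV texture once a dense-correspondence network such as Ego-DPNet has predicted per-pixel UV coordinates; the correspondence format, texture resolution, and per-texel averaging are assumptions for illustration.

```python
import torch

def unproject_to_uv(img, uv, mask, tex_size=256):
    """img: (3,H,W) fisheye frame; uv: (2,H,W) per-pixel texture coordinates
    in [0,1] (the kind of dense correspondence Ego-DPNet predicts); mask:
    (H,W) bool, True where the pixel lies on the body. Returns a (3,T,T)
    partial UV texture, zero where unobserved."""
    tex = torch.zeros(3, tex_size, tex_size)
    cnt = torch.zeros(tex_size * tex_size)
    u = (uv[0][mask] * (tex_size - 1)).long()
    v = (uv[1][mask] * (tex_size - 1)).long()
    idx = v * tex_size + u                      # flat texel index per pixel
    pix = img[:, mask]                          # (3,N) body pixels
    for c in range(3):                          # average all pixels per texel
        tex[c].view(-1).index_add_(0, idx, pix[c])
    cnt.index_add_(0, idx, torch.ones_like(idx, dtype=torch.float))
    return tex / cnt.clamp(min=1).reshape(tex_size, tex_size)

# usage with dummy data
img = torch.rand(3, 120, 160)
uv = torch.rand(2, 120, 160)
mask = torch.rand(120, 160) > 0.5
partial_tex = unproject_to_uv(img, uv, mask)    # (3,256,256)
```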
We propose Neural Actor (NA), a new method for high-quality synthesis of humans from arbitrary viewpoints and under arbitrary controllable poses. Our method builds on recent work on neural scene representation and rendering, which learns representations of geometry and appearance from 2D images alone. While existing works offer compelling photo-realistic rendering of static scenes and playback of dynamic scenes, reconstruction and rendering of humans with neural implicit methods, in particular under user-controlled novel poses, is still difficult. To address this problem, we utilize a coarse body model as a proxy to unwarp the surrounding 3D space into a canonical pose. A neural radiance field learns pose-dependent geometric deformations and pose- and view-dependent appearance effects in the canonical space from multi-view video input. To synthesize novel views with high-fidelity dynamic geometry and appearance, we leverage 2D texture maps defined on the body model as latent variables for predicting residual deformations and the dynamic appearance. Experiments demonstrate that our method achieves better quality than the state of the art for playback as well as novel pose synthesis, and can even generalize to new poses that differ strongly from the training poses. Furthermore, our method also supports body shape control of the synthesized results.
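A toy sketch of the canonicalization idea, under loud assumptions: each sampled 3D point borrows the translation of its nearest body-model vertex to unwarp into the canonical pose (a crude stand-in for full inverse skinning), and a NeRF-style MLP conditioned on the pose vector is then queried in canonical space.

```python
import torch
import torch.nn as nn

def unwarp(points, verts_posed, verts_canon):
    """points: (N,3) samples in posed space; verts_*: (V,3) body vertices.
    Nearest-vertex translation approximates inverse skinning."""
    d = torch.cdist(points, verts_posed)        # (N,V) distances
    idx = d.argmin(dim=1)
    return points + (verts_canon[idx] - verts_posed[idx])

class CanonicalNeRF(nn.Module):
    """Pose-conditioned radiance field queried in the canonical space."""
    def __init__(self, pose_dim=69):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + pose_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, 4))                  # RGB + density
    def forward(self, x_canon, pose):
        h = torch.cat([x_canon, pose.expand(x_canon.shape[0], -1)], dim=-1)
        out = self.mlp(h)
        return torch.sigmoid(out[:, :3]), torch.relu(out[:, 3])

# usage with dummy geometry
pts = torch.rand(1024, 3)
vp, vc = torch.rand(6890, 3), torch.rand(6890, 3)   # posed / canonical vertices
rgb, sigma = CanonicalNeRF()(unwarp(pts, vp, vc), torch.zeros(69))
```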
We present a new pose transfer method for synthesizing a human animation from a single image of a person controlled by a sequence of body poses. Existing pose transfer methods exhibit significant visual artifacts when applied to a novel scene, resulting in temporal inconsistency and failure to preserve the identity and textures of the person. To address these limitations, we design a compositional neural network that predicts the silhouette, garment labels, and textures. Each modular network is explicitly dedicated to a subtask that can be learned from synthetic data. At inference time, we utilize the trained networks to produce a unified representation of appearance and its labels in UV coordinates, which remains constant across poses. The unified representation provides incomplete yet strong guidance for generating the appearance in response to pose changes. We use the trained networks to complete the appearance and render it with the background. With these strategies, we are able to synthesize human animations that preserve the identity and appearance of the person in a temporally coherent way, without any fine-tuning of the networks on the test scene. Experiments show that our method outperforms the state of the art in terms of synthesis quality, temporal coherence, and generalization ability.
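The core of the unified UV representation can be illustrated in a few lines: appearance is stored once in UV space, and a new pose only changes where each texel is sampled, which is what keeps identity constant across frames. The IUV format and shapes below are assumptions, not the paper's code.

```python
import torch
import torch.nn.functional as F

def render_from_uv(uv_texture, target_iuv):
    """uv_texture: (1,C,T,T) pose-invariant appearance; target_iuv: (1,2,H,W)
    per-pixel UV coordinates of the target pose in [0,1]. Sampling a fixed
    texture with pose-dependent coordinates keeps identity constant."""
    grid = target_iuv.permute(0, 2, 3, 1) * 2 - 1   # (1,H,W,2) in [-1,1]
    return F.grid_sample(uv_texture, grid, align_corners=True)

# usage with dummy data
tex = torch.rand(1, 3, 256, 256)                    # unified UV appearance
iuv = torch.rand(1, 2, 64, 64)                      # target-pose correspondences
frame = render_from_uv(tex, iuv)                    # (1,3,64,64) output frame
```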
Modern telecom systems are monitored with performance and system logs from multiple application layers and components. Detecting anomalous events from these logs is key to identifying security breaches, resource over-utilization, critical/fatal errors, etc. Current supervised log anomaly detection frameworks tend to perform poorly on new types or signatures of anomalies with few or no samples in the training data. In this work, we propose a meta-learning-based log anomaly detection framework (LogAnMeta) for detecting anomalies from sequences of log events with few samples. LogAnMeta trains a hybrid few-shot classifier in an episodic manner. The experimental results demonstrate the efficacy of our proposed method.
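The abstract does not spell out the episodic procedure, so the sketch below shows a generic N-way K-shot episode with a prototypical-network classifier as a stand-in for LogAnMeta's hybrid few-shot classifier; the encoder, shot/way counts, and random data are placeholders.

```python
import torch
import torch.nn.functional as F

def episode_loss(encoder, support_x, support_y, query_x, query_y, n_way):
    """One N-way K-shot episode: class prototypes are mean support
    embeddings; queries are classified by distance to the prototypes."""
    z_s, z_q = encoder(support_x), encoder(query_x)
    protos = torch.stack([z_s[support_y == c].mean(0) for c in range(n_way)])
    logits = -torch.cdist(z_q, protos)           # nearer prototype = higher score
    return F.cross_entropy(logits, query_y)

encoder = torch.nn.Sequential(torch.nn.Linear(128, 64), torch.nn.ReLU(),
                              torch.nn.Linear(64, 32))
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)
for _ in range(100):                             # episodes sampled from base tasks
    sx, sy = torch.randn(10, 128), torch.arange(5).repeat(2)   # 5-way 2-shot
    qx, qy = torch.randn(15, 128), torch.randint(0, 5, (15,))
    loss = episode_loss(encoder, sx, sy, qx, qy, n_way=5)
    opt.zero_grad(); loss.backward(); opt.step()
```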
The coverage of different stakeholders mentioned in news articles significantly impacts the slant or polarity detection of the concerned news publishers. For instance, pro-government media outlets would give more coverage to government stakeholders to increase their visibility to news audiences. In contrast, anti-government news agencies would focus more on the views of opponent stakeholders to inform readers about the shortcomings of government policies. In this paper, we address the problem of stakeholder extraction from news articles and thereby determine the inherent bias present in news reporting. Identifying potential stakeholders in multi-topic news scenarios is challenging because each news topic has different stakeholders. The research presented in this paper utilizes both contextual information and external knowledge to identify the topic-specific stakeholders from news articles. We also apply a sequential incremental clustering algorithm to group the entities with similar stakeholder types. We carried out all our experiments on news articles about four Indian government policies published by numerous national and international news agencies. We further generalize our system, and the experimental results show that the proposed model can be extended to other news topics.
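As a rough illustration of the sequential incremental clustering step (the exact algorithm, embeddings, and threshold are not given in the abstract and are assumed here), a common single-pass variant assigns each entity to the most similar existing centroid or opens a new cluster:

```python
import numpy as np

def incremental_cluster(embeddings, threshold=0.8):
    """Assign each entity embedding, in arrival order, to the most similar
    existing cluster centroid; open a new cluster if none is similar enough."""
    centroids, members, labels = [], [], []
    for e in embeddings:
        e = e / np.linalg.norm(e)
        if centroids:
            sims = [float(c @ e) for c in centroids]      # cosine similarity
            best = int(np.argmax(sims))
            if sims[best] >= threshold:
                members[best].append(e)
                centroids[best] = np.mean(members[best], axis=0)
                centroids[best] /= np.linalg.norm(centroids[best])
                labels.append(best)
                continue
        centroids.append(e); members.append([e]); labels.append(len(centroids) - 1)
    return labels
```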
Many problems can be viewed as forms of geospatial search aided by aerial imagery, with examples ranging from detecting poaching activity to detecting human trafficking. We model this class of problems in a visual active search (VAS) framework, which takes as input an image of a broad area, and aims to identify as many examples of a target object as possible. It does this through a limited sequence of queries, each of which verifies whether an example is present in a given region. We propose a reinforcement learning approach for VAS that leverages a collection of fully annotated search tasks as training data to learn a search policy, and combines features of the input image with a natural representation of active search state. Additionally, we propose domain adaptation techniques to improve the policy at decision time when training data is not fully reflective of the test-time distribution of VAS tasks. Through extensive experiments on several satellite imagery datasets, we show that the proposed approach significantly outperforms several strong baselines. Code and data will be made public.
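A schematic of the VAS interaction loop under stated assumptions: the search state concatenates image features with per-region query/outcome flags, the policy masks already-queried regions, and each episode's hits and log-probabilities would feed a REINFORCE-style update. Grid size, features, and the oracle are placeholders, not the paper's design.

```python
import torch
import torch.nn as nn

class VASPolicy(nn.Module):
    def __init__(self, feat_dim, n_regions):
        super().__init__()
        # image features + (queried?, found?) flags per region
        self.net = nn.Sequential(nn.Linear(feat_dim + 2 * n_regions, 128),
                                 nn.ReLU(), nn.Linear(128, n_regions))
    def forward(self, img_feat, queried, found):
        x = torch.cat([img_feat, queried, found], dim=-1)
        logits = self.net(x).masked_fill(queried.bool(), -1e9)  # no repeat queries
        return torch.distributions.Categorical(logits=logits)

def run_search(policy, img_feat, oracle, n_regions, budget):
    queried, found = torch.zeros(n_regions), torch.zeros(n_regions)
    log_probs, hits = [], 0.0
    for _ in range(budget):                      # limited query budget
        dist = policy(img_feat, queried, found)
        a = dist.sample()
        log_probs.append(dist.log_prob(a))
        r = float(oracle(int(a)))                # 1 if a target is in region a
        queried[a], found[a], hits = 1.0, r, hits + r
    return hits, torch.stack(log_probs)          # loss: -(hits * log_probs).sum()
```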
We present XKD, a novel self-supervised framework to learn meaningful representations from unlabelled video clips. XKD is trained with two pseudo tasks. First, masked data reconstruction is performed to learn modality-specific representations. Next, self-supervised cross-modal knowledge distillation is performed between the two modalities through teacher-student setups to learn complementary information. To identify the most effective information to transfer, and to tackle the domain gap between the audio and visual modalities, which could hinder knowledge transfer, we introduce a domain alignment strategy for effective cross-modal distillation. Lastly, to develop a general-purpose solution capable of handling both audio and visual streams, a modality-agnostic variant of our proposed framework is introduced, which uses the same backbone for both audio and visual modalities. Our proposed cross-modal knowledge distillation improves linear evaluation top-1 accuracy of video action classification by 8.4% on UCF101, 8.1% on HMDB51, 13.8% on Kinetics-Sound, and 14.2% on Kinetics400. Additionally, our modality-agnostic variant shows promising results in developing a general-purpose network capable of handling different data streams. The code is released on the project website.
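A hedged sketch of the bidirectional distillation objective: each modality's student regresses the other modality's frozen teacher, plus a simple moment-matching term as an illustrative stand-in for the paper's domain alignment strategy. The encoders, feature sizes, and loss weight are placeholders.

```python
import torch
import torch.nn.functional as F

def xkd_losses(audio_student, video_teacher, video_student, audio_teacher,
               audio_x, video_x):
    """Each modality's student matches the *other* modality's (frozen)
    teacher, so complementary information flows in both directions."""
    with torch.no_grad():                        # teacher targets are detached
        t_v = video_teacher(video_x)
        t_a = audio_teacher(audio_x)
    s_a, s_v = audio_student(audio_x), video_student(video_x)
    distill = F.mse_loss(s_a, t_v) + F.mse_loss(s_v, t_a)
    # crude domain alignment: match first/second moments across modalities
    align = (s_a.mean(0) - s_v.mean(0)).pow(2).sum() \
          + (s_a.var(0) - s_v.var(0)).pow(2).sum()
    return distill + 0.1 * align

# usage with linear encoders and dummy batches
a_s, a_t = torch.nn.Linear(40, 32), torch.nn.Linear(40, 32)
v_s, v_t = torch.nn.Linear(512, 32), torch.nn.Linear(512, 32)
loss = xkd_losses(a_s, v_t, v_s, a_t, torch.randn(8, 40), torch.randn(8, 512))
```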
The vision community has explored numerous pose-guided human editing methods due to their extensive practical applications. Most of these methods still use an image-to-image formulation in which a single image is given as input to produce an edited image as output. However, the problem is ill-defined in cases where the target pose is significantly different from the input pose. Existing methods then resort to inpainting or style transfer to handle occlusions and preserve content. In this paper, we explore the utilization of multiple views to minimize the issue of missing information and generate an accurate representation of the underlying human model. To fuse the knowledge from multiple viewpoints, we design a selector network that takes the pose keypoints and texture from images and generates an interpretable per-pixel selection map. After that, the encodings from a separate network (trained on a single-image human reposing task) are merged in the latent space. This enables us to generate accurate, precise, and visually coherent images for different editing tasks. We show the application of our network on two newly proposed tasks: multi-view human reposing and mix-and-match human image generation. Additionally, we study the limitations of single-view editing and scenarios in which multi-view provides a much better alternative.
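A minimal sketch of the selector idea under assumed shapes: a small conv net scores each view per pixel, and a softmax across views yields an interpretable per-pixel selection map used to blend the per-view encodings. The feature content (pose keypoints plus texture) and network size are assumptions.

```python
import torch
import torch.nn as nn

class Selector(nn.Module):
    def __init__(self, in_ch):
        super().__init__()
        self.conv = nn.Sequential(nn.Conv2d(in_ch, 32, 3, padding=1),
                                  nn.ReLU(), nn.Conv2d(32, 1, 1))
    def forward(self, view_feats):
        """view_feats: (V,C,H,W) per-view features (pose + texture encodings).
        Returns (V,1,H,W) weights summing to 1 over views at each pixel."""
        scores = torch.stack([self.conv(f[None])[0] for f in view_feats])
        return torch.softmax(scores, dim=0)      # interpretable selection map

# usage with toy features for 3 views
views = torch.rand(3, 8, 64, 64)
sel = Selector(8)(views)                         # (3,1,64,64)
fused = (sel * views).sum(dim=0)                 # per-pixel weighted blend
```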
The use of multilingual language models for tasks in low- and high-resource languages has been a success story in deep learning. In recent times, Arabic has been receiving widespread attention on account of its dialectal variance. While prior research studies have tried to adapt these multilingual models to dialectal variants of Arabic, it still remains a challenging problem owing to the lack of sufficient monolingual dialectal data and parallel translation data for such dialectal variants. It remains an open problem whether the limited dialectal data can be used to improve the models trained in Arabic on its dialectal variants. First, we show that multilingual-BERT (mBERT) incrementally pretrained on Arabic monolingual data takes less training time and yields comparable accuracy when compared to our custom monolingual Arabic model, and beats existing models (by an avg. metric of +$6.41$). We then explore two continual pre-training methods: (1) using small amounts of dialectal data for continual finetuning, and (2) using parallel Arabic-to-English data and a Translation Language Modeling loss function. We show that both approaches help improve performance on dialectal classification tasks (+$4.64$ avg. gain) when used on monolingual models.
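For the second method, here is a minimal sketch of TLM-style continued pretraining with Hugging Face Transformers (the checkpoint and the sentence pair are placeholders): an Arabic sentence and its English translation are packed into one sequence and masked jointly, so the model can attend to the translation when recovering masked tokens.

```python
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling)

tok = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-multilingual-cased")
collator = DataCollatorForLanguageModeling(tokenizer=tok, mlm_probability=0.15)

pair = tok("هذه جملة باللغة العربية",           # Arabic segment
           "This is a sentence in Arabic",       # parallel English segment
           truncation=True)
batch = collator([pair])                         # stochastic masking over both halves
loss = model(**batch).loss                       # MLM loss on the packed pair
loss.backward()                                  # one continual-pretraining step
```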